After completing this notebook, you will be able to:
There are lots of different imaging formats around, but we will focus on two of the most common:
Regardless of format, imaging data consists of two parts:
There are one or more numerical values associated with each pixel/voxel. These values could represent many different quantities:
import os
import ants
import nibabel as nb
import numpy as np
import pandas as pd

data_root="/Users/davecash/Data/IDEAS/TeamCoder_EBM/bids"
subject_id="011-S-4906"
bids_desc = "t1"
subject_root = os.path.join(data_root,"sub-"+subject_id)
t1_img_name = os.path.join(subject_root,"anat","sub-" + subject_id + "_desc-" + bids_desc + ".nii.gz")
t1_img = nb.load(t1_img_name)
print(f"Dimensions {t1_img.shape}")
print(f"Distance and Time Units: {t1_img.header.get_xyzt_units()}")
print(f"Voxel spacing: {t1_img.header.get_zooms()}")
Dimensions (176, 240, 256)
Distance and Time Units: ('mm', 'sec')
Voxel spacing: (1.1999999, 1.0, 1.0)
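The zooms give the physical size of each voxel, so their product is the voxel volume, which is useful later for converting voxel counts into volumes. A quick sketch using the spacing reported above:

```python
import numpy as np

# Voxel spacing reported by the header above (mm)
zooms = (1.1999999, 1.0, 1.0)

# Physical volume of one voxel in mm^3
voxel_volume = float(np.prod(zooms))
print(f"Voxel volume: {voxel_volume:.2f} mm^3")
```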
Imaging data tends to be large, so in order to save space, we only load the data when we want to work with it. This is done using the get_fdata() method and storing the result in the array t1_img_data.
t1_img_data = t1_img.get_fdata()
Let's look at one voxel and its intensity. Voxel locations are typically referred to as i (Columns), j (Rows), and k (Slices). Feel free to change location and see how the number changes.
test_voxel = (100,100,100)
print(f"Intensity at voxel {test_voxel}: {t1_img_data[test_voxel]}")
Intensity at voxel (100, 100, 100): 198.0
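Because the data is an ordinary NumPy array, standard indexing applies, so you can also pull out whole slices. A sketch with a small synthetic array standing in for t1_img_data:

```python
import numpy as np

# Small synthetic volume standing in for t1_img_data
data = np.arange(4 * 5 * 6, dtype=float).reshape(4, 5, 6)

# An axial slice: fix k (the slice index) and take all i, j
axial = data[:, :, 2]
print(axial.shape)
```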
Based on the dimensions above, the valid indices on the third axis run from 0 to 255, so indexing outside that range raises an error:
bad_voxel=(1,1,256)
print(t1_img_data[bad_voxel])
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/var/folders/9y/g1lfqkc525zbf3824tckr5m00000gp/T/ipykernel_4211/2339355092.py in <module>
      1 bad_voxel=(1,1,256)
----> 2 print(t1_img_data[bad_voxel])

IndexError: index 256 is out of bounds for axis 2 with size 256
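A small guard function avoids this kind of error; here is one way to check an index against the image shape before using it (a sketch, not part of nibabel):

```python
def in_bounds(voxel, shape):
    """Return True if an (i, j, k) index is valid for an array of this shape."""
    return all(0 <= idx < dim for idx, dim in zip(voxel, shape))

shape = (176, 240, 256)                   # image dimensions from above
print(in_bounds((100, 100, 100), shape))  # valid voxel
print(in_bounds((1, 1, 256), shape))      # 256 is out of range for axis 2
```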
# Get the minimum and maximum
img_min = t1_img_data.min()
img_max = t1_img_data.max()
print(f"Minimum: {img_min}")
print(f"Maximum: {img_max}")
Minimum: 0.0
Maximum: 952.0
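These extremes are handy for min-max normalisation, e.g. rescaling intensities to [0, 1] before visualisation. A sketch with a stand-in array built from the values reported above:

```python
import numpy as np

# Stand-in for t1_img_data, using the min, a sample intensity, and the max above
data = np.array([0.0, 198.0, 952.0])

# Rescale intensities to [0, 1]
normalised = (data - data.min()) / (data.max() - data.min())
print(normalised)
```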
(Image credit: 3D Slicer.) The affine transformation is the mapping that converts a voxel location to real-world space and back.
np.set_printoptions(suppress=True)
print(t1_img.affine)
[[   1.20000458    0.            0.         -103.64485931]
 [   0.            1.            0.         -102.7288208 ]
 [   0.            0.            1.         -152.76271057]
 [   0.            0.            0.            1.        ]]
# Converting from voxel location to real world coordinates
# The fourth value is a homogeneous coordinate; it is only there so the 4x4 matrix multiplication works
voxel_loc = (100,100,100,1)
real_world = t1_img.affine.dot(voxel_loc)
np.set_printoptions(precision=2)
print(f"Voxel location: {voxel_loc}")
print(f"Real World coordinates: {real_world}")
Voxel location: (100, 100, 100, 1)
Real World coordinates: [ 16.36  -2.73 -52.76   1.  ]
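Multiplying by the inverse of the affine goes the other way, from real-world coordinates back to voxel indices. A sketch using the affine printed above:

```python
import numpy as np

# The affine printed above
affine = np.array([
    [1.20000458, 0.0, 0.0, -103.64485931],
    [0.0, 1.0, 0.0, -102.7288208],
    [0.0, 0.0, 1.0, -152.76271057],
    [0.0, 0.0, 0.0, 1.0],
])

# Real-world (mm) coordinates back to voxel indices via the inverse affine
real_world = np.array([16.36, -2.73, -52.76, 1.0])
voxel = np.linalg.inv(affine) @ real_world
print(np.round(voxel).astype(int)[:3])
```

Rounding recovers the voxel we started from, (100, 100, 100).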
%%capture --no-display
niwidget = NiftiWidget(t1_img_name)
niwidget.nifti_plotter()
The interactive viewer renders the volume with sliders to move through the x, y, and z slice positions.
We will go through some common approaches:
# Load in the image
t1_img = ants.image_read(t1_img_name)
# Perform the N3 bias correction
t1_n3 = ants.n3_bias_field_correction(t1_img)
t1_diff = t1_img - t1_n3
ants.plot_ortho_stack(images = [t1_img,t1_n3,t1_diff],
title="Bias Correction Review")
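The idea behind bias correction can be sketched with a toy example: the scanner inhomogeneity is modelled as a smooth multiplicative field, which the correction estimates and divides out. This uses synthetic numbers to show the model, not the N3 algorithm itself:

```python
import numpy as np

# Toy multiplicative bias: a smooth field scaling a uniform "true" image
true_signal = np.full((4, 4), 100.0)
bias_field = np.linspace(0.8, 1.2, 16).reshape(4, 4)
observed = true_signal * bias_field

# Correction estimates the field and divides it back out
recovered = observed / bias_field
print(np.allclose(recovered, true_signal))
```

In practice the field is unknown and must be estimated from the image, which is the hard part N3 solves.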
Identify the mathematical mapping that aligns the anatomy from one image to another. This has many uses:
ants.plot_ortho_double(image=mni_img, image2=mni_img,
overlay=mni_mask, overlay2=mni_mask,
overlay_alpha=0.0, overlay_alpha2=0.4,title="MNI image and mask",title_dy=0.25)
ants.plot_ortho_double(image=mni_img,image2=t1_n3,title="Before Registration",title_dy=0.25)
# Find initial guess
init_tx_file = ants.affine_initializer(fixed_image = mni_img, moving_image = t1_n3, mask = mni_mask)
init_tx = ants.read_transform(init_tx_file)
# Transform T1 with initial guess
t1_mni_init = ants.apply_transforms(fixed = mni_img, moving = t1_n3,transformlist=init_tx_file)
ants.plot_ortho_double(image=mni_img,image2=t1_mni_init,title="After Initialisation",title_dy=0.25)
# Using initial guess, run registration
affine = ants.registration(fixed = mni_img, moving = t1_n3,
mask = mni_mask,
type_of_transform = "Affine",
initial_transform = init_tx_file)
ants.plot_ortho_double(image=mni_img,image2=affine['warpedmovout'],title="Affine Registration",title_dy=0.25)
# Use the affine registration to transfer the mask to our image
# We are going in the opposite direction from the registration, which is why whichtoinvert is set to True
t1_mask = ants.apply_transforms(fixed = t1_n3, moving = mni_mask, interpolator="nearestNeighbor",
transformlist=affine['fwdtransforms'],whichtoinvert=[True])
ants.plot_ortho_double(image=t1_n3, image2=t1_n3, overlay=t1_mask, overlay2=t1_mask,
overlay_alpha=0.0, overlay_alpha2=0.4,title="T1 image and transferred brain mask",title_dy=0.25)
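Note the nearestNeighbor interpolator: masks and label images contain discrete values, and an intensity-averaging interpolator would invent intermediate values that correspond to no label. A toy illustration:

```python
import numpy as np

# Two neighbouring label values in a parcellation or mask image
labels = np.array([2.0, 4.0])

# Linear interpolation halfway between them invents label 3, which may not exist
linear_midpoint = labels.mean()

# Nearest-neighbour keeps one of the original, valid labels
nearest = labels[0]
print(linear_midpoint, nearest)
```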
# The utility get_mask can clean up the additional bits outside the brain
t1_brain = ants.mask_image(t1_n3,t1_mask)
t1_mask_cleaned = ants.get_mask(t1_brain,cleanup=3)
t1_brain_cleaned = ants.mask_image(t1_n3,t1_mask_cleaned)
ants.plot_ortho_stack([t1_n3,t1_brain,t1_brain_cleaned],title="Original, Skull-stripped original, Skull-stripped clean")
If you compare the second row and the third row, you can see that an additional bit of connective tissue at the top and front of the brain has been removed.
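Once we have a clean brain mask, a common next step is to estimate total brain volume by counting mask voxels and multiplying by the physical voxel volume. A minimal sketch with a synthetic mask (with a real ANTs image you would get the array via its .numpy() method):

```python
import numpy as np

# Synthetic binary mask standing in for t1_mask_cleaned
mask = np.zeros((10, 10, 10))
mask[2:8, 2:8, 2:8] = 1                 # 6 x 6 x 6 = 216 "brain" voxels

voxel_volume_mm3 = 1.2 * 1.0 * 1.0      # spacing reported earlier
brain_volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0
print(f"Brain volume: {brain_volume_ml:.4f} mL")
```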
We divide the brain into three primary tissue types:
For each voxel, we obtain the probabilities that it contains each tissue type, as well as the most likely tissue.
tissue_seg = ants.atropos(a=t1_n3,x=t1_mask_cleaned,m="[0.3,1x1x1]", i='Kmeans[3]')
ants.plot_ortho_double(image=t1_n3, image2=tissue_seg['segmentation'],
title="T1 image and binary tissue segmentation", title_dy=0.25)
ants.plot_ortho_double(image = t1_n3,image2=t1_n3,
overlay=tissue_seg['probabilityimages'][1], overlay_alpha=0.5, overlay_cmap='Blues_r',
overlay2=tissue_seg['probabilityimages'][2], overlay_alpha2=0.5,overlay_cmap2='Reds_r',
title="GM (blue) and WM (red) probabilistic segmentation",title_dy=0.25)
tissue_stats = ants.label_stats(t1_n3,tissue_seg['segmentation'])
print(tissue_stats)
   LabelValue        Mean         Min         Max     Variance      Count        Volume          Mass         x          y          z    t
3         0.0   45.446507    0.000000  763.698242  8478.516362  9586503.0  1.150380e+07  0.000000e+00  0.000000   0.000000   0.000000  0.0
0         1.0   81.280816    0.000000  515.789612   918.073146   350432.0  4.205184e+05  2.848340e+07 -3.136276 -11.010318 -25.439895  0.0
1         2.0  186.331472   84.513969  554.854614   915.474068   478405.0  5.740859e+05  8.914191e+07 -2.317012 -10.927011 -30.354117  0.0
2         3.0  264.410317  165.687088  661.430847   819.631979   398100.0  4.777200e+05  1.052617e+08 -2.907442  -4.847868 -27.829993  0.0
ants.plot_ortho_double(image=aal_img,image2=aal_img, overlay=aal_template,overlay2=aal_template,
overlay_alpha=0.0,title="AAL Image and Labels",title_dy=0.25)
To get a closer match between our image and AAL, we will do a deformable registration, which allows local areas to be warped so the images match better.
# Dilate the brain mask slightly, then find an initial affine guess
t1_mask_dil = ants.iMath(t1_mask_cleaned,'MD',2)
aal_affine_init_file = ants.affine_initializer(fixed_image = t1_n3,moving_image = aal_img)
# Perform deformable registration
syn_aal = ants.registration(fixed=t1_n3,moving=aal_img,mask=t1_mask_dil,
type_of_transform="SyN",initial_transform=aal_affine_init_file,
reg_iterations=(60,30,15))
When we look at the images, they appear to be reasonably well aligned. The ventricles are a good area to check, as they are typically much larger in older individuals and would have been expanded to match our image. You may also notice wavy patches in the bright fat regions near the top of the skull in the warped template. That is likely because we told the algorithm to focus only on the brain, so these surrounding regions do not have to fit well for the registration to achieve a good alignment.
ants.plot_ortho_double(image=t1_n3, image2=aal_img, scale=True,scale2=True,
title="Before Registration",title_dy=0.25)
ants.plot_ortho_double(image=t1_n3, image2=syn_aal['warpedmovout'], scale=True,scale2=True,
title="After Registration",title_dy=0.25)
# Transfer anatomical labels from AAL to our image
t1_aal_label = ants.apply_transforms(fixed = t1_n3, moving = aal_template,
transformlist=syn_aal['fwdtransforms'],interpolator="genericLabel")
ants.plot_ortho_double(image=t1_n3,image2=t1_n3,overlay=t1_aal_label,overlay2=t1_aal_label,
scale=True,scale2=True,
overlay_alpha=0.0,title="Transferred labels",title_dy=0.25)
With some manipulation (not shown), you can attach the label name to each value:
print(aal_stats)
                       Volume        Mean         Min         Max     Variance
LabelName
NaN              1.187768e+07   49.944681    0.000000  763.698242  9133.227852
Precentral_L     1.885560e+04  190.458254    7.131933  314.845673  4720.098670
Precentral_R     1.880280e+04  177.342238    0.990395  303.026154  5292.631203
Frontal_Sup_2_L  2.270400e+04  173.423227    5.752030  303.576630  3973.795143
Frontal_Sup_2_R  2.342160e+04  167.367160    0.000000  297.719757  4114.773860
...                       ...         ...         ...         ...          ...
Red_N_R          4.668000e+02  289.374862  200.934280  337.609131   507.435718
LC_L             1.200000e+01  241.486488  188.494629  320.277435  1363.922542
LC_R             1.800000e+01  154.586862   62.090233  276.395355  2959.780570
Raphe_D          4.800000e+01  249.690186  143.414246  310.540894  1527.498986
Raphe_M          1.920000e+01  176.791312   63.266151  288.393921  4279.684348

[167 rows x 5 columns]
Often a more accurate way to get parcellations of a T1 is by using Freesurfer. This command takes 3-6 hours to run on most machines.
recon-all -subject 011_S_4906 -i bids/sub-011-S-4906/anat/sub-011-S-4906_desc-t1-adni-preproc.nii.gz -all -threads 4

pet_to_mri = ants.registration(fixed = t1_fs_nu,moving=av45,type_of_transform="Rigid")
ants.plot_ortho_double(image=t1_fs_nu,image2=t1_fs_nu,overlay=t1_fs_nu,overlay_alpha=0.0,
overlay2=pet_to_mri['warpedmovout'], overlay_alpha2=0.55,
scale=True,scale2=True,title="Co-registered MRI and PET", title_dy=0.25)
fs_aparc_in_pet = ants.apply_transforms(fixed=av45, moving=t1_fs_aparc,whichtoinvert=[True],
transformlist=pet_to_mri['fwdtransforms'],interpolator="genericLabel")
ants.plot_ortho_double(image=av45,image2=av45,scale=True,scale2=True,
overlay=fs_aparc_in_pet, overlay_alpha=0.0,
overlay2=fs_aparc_in_pet, overlay_alpha2=0.7,
title="T1 labels transferred to PET", title_dy=0.25)
pet_stats = ants.label_stats(av45,fs_aparc_in_pet)
pet_stats["LabelValue"] = pet_stats['LabelValue'].astype("int")
pet_stats = pet_stats.set_index("LabelValue")
df_aparc = pd.read_csv(os.path.join(data_root,"aparc_aseg_roi.csv"),index_col=0)
pet_stats = pet_stats.join(df_aparc)
pet_stats = pet_stats[["LabelName","Volume","Mean","Min","Max","Variance"]]
pet_stats["TotalUptake"] = pet_stats["Volume"] * pet_stats["Mean"]
cerebellum_rows = pet_stats["LabelName"].map(lambda x: "Cerebellum" in x)
ref_volume = pet_stats[cerebellum_rows]['Volume'].sum()
ref_uptake = pet_stats[cerebellum_rows]['TotalUptake'].sum()
mean_ref = ref_uptake / ref_volume
pet_stats["SUVR"] = pet_stats["Mean"] / mean_ref
pet_stats = pet_stats.set_index("LabelName")
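The reference-region arithmetic above can be sanity-checked with toy numbers (the region names, volumes, and uptake values below are hypothetical):

```python
# Hypothetical regions: volume in mm^3 and mean uptake
regions = {
    "Cerebellum_L": {"volume": 100.0, "mean": 1.0},
    "Cerebellum_R": {"volume": 100.0, "mean": 1.2},
    "Cortex":       {"volume": 500.0, "mean": 1.65},
}

# Volume-weighted mean uptake over the cerebellar reference region
ref = {name: r for name, r in regions.items() if "Cerebellum" in name}
ref_uptake = sum(r["volume"] * r["mean"] for r in ref.values())
ref_volume = sum(r["volume"] for r in ref.values())
mean_ref = ref_uptake / ref_volume          # (100 + 120) / 200 = 1.1

# SUVR of the target region: its mean uptake relative to the reference
suvr_cortex = regions["Cortex"]["mean"] / mean_ref
print(f"Cortex SUVR: {suvr_cortex:.2f}")
```

Note the reference mean is weighted by volume, so larger cerebellar structures contribute proportionally more.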
Volumes ($mm^3$) and mean SUVR for each region:
print(pet_stats[["Volume","SUVR"]])
                                   Volume      SUVR
LabelName
Unknown                       7210708.000  0.224773
Left-Cerebral-White-Matter     217387.125  1.781537
Left-Lateral-Ventricle          31353.750  0.687445
Left-Inf-Lat-Vent                1080.000  1.147559
Left-Cerebellum-White-Matter    11103.750  1.452566
...                                   ...       ...
ctx-rh-supramarginal             9706.500  1.670934
ctx-rh-frontalpole                897.750  1.611692
ctx-rh-temporalpole              2467.125  1.364175
ctx-rh-transversetemporal         816.750  1.723891
ctx-rh-insula                    5956.875  1.556136

[109 rows x 2 columns]